
    Bilateral symmetry of object silhouettes under perspective projection

    Symmetry is an important property of objects and is exhibited in different forms, e.g., bilateral, rotational, etc. This paper presents an algorithm for computing the bilateral symmetry of silhouettes of shallow objects under perspective distortion, exploiting the invariance of the cross ratio to projective transformations. The basic idea is to use the cross ratio to compute a number of midpoints of cross sections and then fit a straight line through them. The goodness-of-fit determines the likelihood of that line being the axis of symmetry. We analytically estimate the midpoint's location as a function of the vanishing point for a given object silhouette, so finding the symmetry axis amounts to a 2D search in the space of vanishing points. The approach is global in the sense that it considers the whole silhouette of the object rather than small parts of it. Experiments on two datasets, as well as internet images of symmetric objects, show that the method can find axes of symmetry in considerably distorted perspective images.
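    The projective fact underlying the midpoint construction is standard: the midpoint of a world-space chord is the harmonic conjugate of the point at infinity on that chord, and cross ratios survive perspective projection, so the projected midpoint m of a chord with image endpoints p1, p2 satisfies cross-ratio (p1, p2; m, v) = -1, where v is the vanishing point. The sketch below is not the authors' code; it assumes chord endpoint pairs have already been extracted from the silhouette, solves the harmonic constraint for m, and scores a candidate vanishing point by how well the resulting midpoints fit a straight line.

```python
import numpy as np

def harmonic_midpoint(p1, p2, v):
    """Projected midpoint of a world-space chord with image endpoints p1, p2.

    The world midpoint is the harmonic conjugate of the point at infinity
    on the chord; since cross ratios are preserved under perspectivities,
    its image m satisfies cross-ratio (p1, p2; m, v) = -1 for vanishing
    point v. We solve this in 1D coordinates along the chord (v is
    projected onto the chord line if it is not exactly collinear).
    """
    d = (p2 - p1) / np.linalg.norm(p2 - p1)
    t1, t2 = 0.0, np.linalg.norm(p2 - p1)
    tv = np.dot(v - p1, d)
    tm = (t1 * (tv - t2) + t2 * (tv - t1)) / (2.0 * tv - t1 - t2)
    return p1 + tm * d

def line_fit_residual(points):
    """RMS perpendicular residual of the total-least-squares line fit."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The smallest singular value equals the square root of the summed
    # squared distances orthogonal to the best-fit line.
    return np.linalg.svd(centered, compute_uv=False)[-1] / np.sqrt(len(pts))

def best_vanishing_point(chords, candidates):
    """2D search over candidate vanishing points.

    chords:     list of (p1, p2) silhouette cross-section endpoint pairs
                (extracting them from a real silhouette is out of scope here)
    candidates: iterable of 2D vanishing-point hypotheses to score
    """
    best_v, best_score = None, np.inf
    for v in candidates:
        mids = [harmonic_midpoint(p1, p2, v) for p1, p2 in chords]
        score = line_fit_residual(mids)  # small residual -> plausible axis
        if score < best_score:
            best_v, best_score = v, score
    return best_v, best_score

# Sanity check: pushing the vanishing point toward infinity recovers the
# ordinary Euclidean midpoint (the affine, distortion-free case).
p1, p2 = np.array([0.0, 0.0]), np.array([4.0, 2.0])
print(harmonic_midpoint(p1, p2, np.array([1e9, 5e8])))  # ~ [2., 1.]
```

    Once `best_vanishing_point` returns the winning hypothesis, the symmetry axis itself is the line fitted through the corresponding midpoints; the residual doubles as the likelihood score described in the abstract.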

    Modeling Perceptual Color Differences by Local Metric Learning

    Measuring perceptual differences between scene colors is key in many computer vision applications such as image segmentation or visual salient region detection. Nevertheless, most of the time we only have access to the rendered image colors, without any means to go back to the true scene colors. Most existing approaches either compute a perceptual distance directly between the rendered image colors, or estimate the scene colors from the rendered image colors and then evaluate perceptual distances. However, the first approach yields distances that can be far from the true scene color differences, while the second requires knowledge of the acquisition conditions, which is unavailable in most applications. In this paper, we design a new local Mahalanobis-like metric learning algorithm that approximates a perceptual scene color difference invariant to the acquisition conditions and computed only from rendered image colors. Using the theoretical framework of uniform stability, we provide consistency guarantees on the learned model. Moreover, our experimental evaluation shows its strong ability (i) to generalize to new colors and devices and (ii) to deal with segmentation tasks.
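    To make the metric-learning idea concrete, the sketch below fits a single Mahalanobis metric d_M(x, y) = sqrt((x - y)^T M (x - y)) to pairs of rendered colors annotated with reference perceptual distances (e.g., Delta E values computed under known acquisition conditions at training time). This is a simplified global variant, not the paper's algorithm: the paper learns several local metrics over a partition of color space and derives uniform-stability guarantees, neither of which is reproduced here. The parametrization M = L L^T keeps M positive semi-definite by construction, and the training data (`pairs`, `targets`) is hypothetical.

```python
import numpy as np

def learn_mahalanobis(pairs, targets, n_iters=2000, lr=0.05):
    """Least-squares Mahalanobis metric learning -- a minimal global sketch.

    pairs:   (n, 2, 3) array of rendered RGB color pairs, assumed in [0, 1]
    targets: (n,) reference perceptual distances the metric should reproduce
    Learns M = L @ L.T so that ||(x - y) @ L|| approximates the target
    distance. lr may need tuning to the scale of colors and targets.
    """
    diffs = pairs[:, 0, :] - pairs[:, 1, :]          # (n, 3) color differences
    L = np.eye(3)                                    # init: Euclidean metric
    for _ in range(n_iters):
        proj = diffs @ L                             # (n, 3)
        pred = np.linalg.norm(proj, axis=1) + 1e-12  # predicted distances
        err = pred - targets
        # Gradient of the mean squared error 0.5 * mean(err^2) w.r.t. L
        grad = diffs.T @ ((err / pred)[:, None] * proj) / len(targets)
        L -= lr * grad
    return L @ L.T

# Toy usage: recover a known ground-truth metric from synthetic pairs.
rng = np.random.default_rng(0)
pairs = rng.random((500, 2, 3))
L_true = np.array([[2.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 3.0]])
targets = np.linalg.norm((pairs[:, 0] - pairs[:, 1]) @ L_true, axis=1)
M = learn_mahalanobis(pairs, targets)
print(np.round(M, 2))  # ~ L_true @ L_true.T, up to optimization error
```

    The paper's local version would partition color space (e.g., by clustering the rendered colors) and run this kind of fit once per cell, so that each region of color space gets its own metric.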